
    Parameter Estimation in Semi-Linear Models Using a Maximal Invariant Likelihood Function

    In this paper, we consider the problem of estimating semi-linear regression models. Using invariance arguments, Bhowmik and King (2001) derived the probability density functions of the maximal invariant statistic for the nonlinear component of these models. Using these density functions as likelihood functions allows us to estimate these models in a two-step process. First, the nonlinear component parameters are estimated by maximising the maximal invariant likelihood function. Then the nonlinear component, with the parameter values replaced by estimates, is treated as a regressor, and ordinary least squares is used to estimate the remaining parameters. We report the results of a simulation study conducted to compare the accuracy of this approach with full maximum likelihood estimation. We find that maximising the maximal invariant likelihood function typically results in less biased and lower-variance estimates than those from full maximum likelihood.
    Keywords: Maximum likelihood estimation, nonlinear modelling, simulation experiment, two-step estimation.
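The two-step scheme described in this abstract can be sketched in outline. Since the maximal invariant likelihood of Bhowmik and King (2001) is not reproduced here, the first step below uses a profile least-squares criterion as a stand-in for it; the model, data-generating process, and parameter names are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical semi-linear model: y = b0 + b1*x + b2*exp(gamma*z) + e.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
z = rng.uniform(0.0, 3.0, size=n)
gamma_true = 1.0
y = 1.0 + 2.0 * x + np.exp(gamma_true * z) + 0.05 * rng.normal(size=n)

def ols(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Step 1: estimate the nonlinear parameter.  The paper maximises the
# maximal invariant likelihood; a profile least-squares criterion is
# used here as a simple stand-in for that objective.
def profile_ssr(gamma):
    X = np.column_stack([np.ones(n), x, np.exp(gamma * z)])
    resid = y - X @ ols(X, y)
    return resid @ resid

gamma_hat = minimize_scalar(profile_ssr, bounds=(0.2, 2.0), method="bounded").x

# Step 2: treat the fitted nonlinear component as an ordinary regressor
# and estimate the remaining (linear) parameters by OLS.
X = np.column_stack([np.ones(n), x, np.exp(gamma_hat * z)])
beta_hat = ols(X, y)
```

The point of the two-step split is that only the scalar nonlinear parameter requires numerical optimisation; everything else reduces to a linear least-squares fit.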

    Influence Diagnostics in GARCH Processes

    Influence diagnostics have become an important tool for statistical analysis since the seminal work of Cook (1986). In this paper we present a curvature-based diagnostic to assess the local influence of minor perturbations on the modified likelihood displacement in a regression model. Using the proposed diagnostic, we study local influence in the GARCH model under two perturbation schemes: model perturbation and data perturbation. We find that the curvature-based diagnostic often provides more information on the local influence being examined than the slope-based diagnostic, especially when the GARCH model is under investigation. An empirical study involving GARCH modeling of the percentage daily returns of the NYSE composite index illustrates the effectiveness of the proposed diagnostic and shows that it may uncover information that the slope-based diagnostic cannot. Because the influence of each observation is not invariant across perturbation schemes, it is advisable to study local influence under several perturbation schemes through curvature-based diagnostics.
    Keywords: Normal curvature, modified likelihood displacement, GARCH models.

    Bayesian semiparametric GARCH models

    This paper investigates a Bayesian sampling approach to parameter estimation in the semiparametric GARCH model with an unknown conditional error density, which we approximate by a mixture of Gaussian densities centered at the individual errors and scaled by a common standard deviation. This mixture density has the form of a kernel density estimator of the errors, with the bandwidth playing the role of the standard deviation. The investigation is motivated by the lack of robustness of GARCH models under any parametric assumption on the error density for the purpose of error-density-based inference, such as value-at-risk (VaR) estimation. The contribution of the paper is to construct the likelihood and posterior of the model and bandwidth parameters under the proposed mixture error density, and to forecast the one-step out-of-sample density of asset returns; the resulting VaR measure is therefore distribution-free. Applying the semiparametric GARCH(1,1) model to daily stock-index returns in eight stock markets, we find that this semiparametric GARCH model is favoured over the GARCH(1,1) model with Student t errors for five indices, and that the GARCH model underestimates VaR compared to its semiparametric counterpart. We also investigate the use and benefit of localized bandwidths in the proposed mixture density of the errors.
    Keywords: Bayes factors, kernel-form error density, localized bandwidths, Markov chain Monte Carlo, value-at-risk.
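The kernel-form error density described above — a mixture of normals centred at the individual errors with the bandwidth as common standard deviation — is simple to write down. The residuals and bandwidth below are illustrative stand-ins, not output of the paper's MCMC sampler.

```python
import numpy as np

def kernel_error_density(e, resids, h):
    """Mixture of Gaussians centred at each residual with common
    standard deviation h, i.e. a kernel density estimator of the
    errors whose bandwidth is h."""
    e = np.atleast_1d(e)[:, None]
    z = (e - resids[None, :]) / h
    return np.mean(np.exp(-0.5 * z**2) / (h * np.sqrt(2 * np.pi)), axis=1)

# Toy check: a density built from standard-normal "residuals"
# integrates to (approximately) one over a wide grid.
rng = np.random.default_rng(1)
resids = rng.normal(size=500)
grid = np.linspace(-6.0, 6.0, 1201)
dens = kernel_error_density(grid, resids, h=0.4)
area = np.sum((dens[1:] + dens[:-1]) / 2 * np.diff(grid))  # trapezoid rule
```

Because each mixture component sits on an observed error, the estimated density adapts to skewness or heavy tails that a single parametric family would miss — the robustness motivation given in the abstract.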

    Estimation of Asymmetric Box-Cox Stochastic Volatility Models Using MCMC Simulation

    The stochastic volatility model enjoys great success in modeling the time-varying volatility of asset returns. There are several specifications for volatility, the most popular of which allows logarithmic volatility to follow an autoregressive Gaussian process, known as log-normal stochastic volatility. From an econometric viewpoint, however, we lack a procedure for choosing an appropriate functional form for volatility. Instead of the log-normal specification, Yu, Yang and Zhang (2002) assumed that Box-Cox transformed volatility follows an autoregressive Gaussian process. However, the empirical evidence they found in currency markets is not strong enough to support the Box-Cox transformation against the alternatives, and it is necessary to seek further empirical evidence from the equity market. This paper develops a sampling algorithm for the Box-Cox stochastic volatility model with a leverage effect incorporated. When the model and the sampling algorithm are applied to the equity market, we find strong empirical evidence to support the Box-Cox transformation of volatility. In addition, the empirical study shows that it is important to incorporate the leverage effect into stochastic volatility models when the volatility of returns on a stock index is under investigation.
    Keywords: Box-Cox transformation, leverage effect, sampling algorithm.
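For reference, the Box-Cox family used in this volatility specification nests the logarithmic (log-normal) model as its limiting case; a minimal sketch, with illustrative values only:

```python
import numpy as np

def box_cox(v, lam):
    """Box-Cox transform of a positive volatility v: log(v) when
    lam = 0, (v**lam - 1)/lam otherwise.  lam -> 0 recovers the
    log-normal specification; lam = 1 gives a linear one."""
    return np.log(v) if lam == 0 else (v ** lam - 1.0) / lam

# The transform approaches the log specification as lam -> 0:
v = 2.0
near_log = box_cox(v, 1e-8)   # close to log(2)
```

Comparing fitted values of lam against 0 is what lets the data speak for or against the log-normal specification, which is the empirical question the abstract addresses.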

    The exact power envelope of tests for a unit root

    We show how to obtain the exact power envelope of tests for a unit root against trend-stationary alternatives, under normality. This is in contrast to the asymptotic power envelope derived by Elliott, Rothenberg and Stock (1996), and is used to demonstrate the lack of power of unit root tests in fixed sample sizes.
    Keywords: power envelope, unit root tests.
    JEL classification: C12, C22.

    Bandwidth Selection for Multivariate Kernel Density Estimation Using MCMC

    Kernel density estimation for multivariate data is an important technique with a wide range of applications in econometrics and finance. However, it has received significantly less attention than its univariate counterpart, mainly because of the increased difficulty of deriving an optimal data-driven bandwidth as the dimension of the data increases. We provide Markov chain Monte Carlo (MCMC) algorithms for estimating optimal bandwidth matrices for multivariate kernel density estimation. Our approach treats the elements of the bandwidth matrix as parameters whose posterior density can be obtained through the likelihood cross-validation criterion. Numerical studies for bivariate data show that the MCMC algorithm generally performs better than the plug-in algorithm under the Kullback-Leibler information criterion, and is as good as the plug-in algorithm under the mean integrated squared error (MISE) criterion. Numerical studies for five-dimensional data show that our algorithm is superior to the normal reference rule. Our MCMC algorithm is the first data-driven bandwidth selector for kernel density estimation with more than two variables, and the sampling algorithm involves no increased difficulty as the dimension of the data increases.
    Keywords: Bandwidth matrices, cross-validation, Kullback-Leibler information, mean integrated squared error, sampling algorithms.
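The likelihood cross-validation criterion on which the posterior above is built can be illustrated directly. Instead of the paper's MCMC sampler over a full bandwidth matrix, the sketch below grid-searches a single common bandwidth for a Gaussian product kernel on simulated bivariate data; all names and settings are illustrative.

```python
import numpy as np

def loo_log_likelihood(data, h):
    """Leave-one-out likelihood cross-validation criterion for a
    Gaussian product kernel with common scalar bandwidth h (a
    simplified version of the bandwidth-matrix case)."""
    n, d = data.shape
    diff = data[:, None, :] - data[None, :, :]            # (n, n, d)
    q = np.exp(-0.5 * np.sum((diff / h) ** 2, axis=2))    # Gaussian kernel
    q /= (h * np.sqrt(2.0 * np.pi)) ** d
    np.fill_diagonal(q, 0.0)                              # leave one out
    f_loo = q.sum(axis=1) / (n - 1)                       # LOO density at each point
    return np.sum(np.log(f_loo))

rng = np.random.default_rng(2)
data = rng.normal(size=(300, 2))
hs = np.linspace(0.1, 1.0, 19)
scores = [loo_log_likelihood(data, h) for h in hs]
h_cv = hs[int(np.argmax(scores))]
```

An MCMC sampler replaces this grid search by treating the bandwidths as parameters with this criterion (exponentiated) as the likelihood, which is what keeps the approach tractable as the dimension grows.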

    A New Procedure For Multiple Testing Of Econometric Models

    A significant role for hypothesis testing in econometrics is diagnostic checking. When checking the adequacy of a chosen model, researchers typically employ a range of diagnostic tests, each of which is designed to detect a particular form of model inadequacy. A major problem is how best to control the overall probability of rejecting the model when it is true and multiple test statistics are used. This paper presents a new multiple testing procedure, which checks whether the calculated values of the diagnostic statistics are consistent with the postulated model being true. This is done by combining bootstrapping, to obtain a multivariate kernel density estimator of the joint density of the test statistics under the null hypothesis, with Monte Carlo simulation, to obtain a p-value from this kernel density. We prove that, under some regularity conditions, the estimated p-value of our test procedure is a consistent estimate of the true p-value. The proposed procedure is applied to tests for autocorrelation in an observed time series, for normality, and for model misspecification through the information matrix. We find that our procedure has correct or nearly correct size and good power, particularly for the more complicated testing problems. We believe it is the first practical method for calculating the overall p-value for a vector of test statistics by simulation.
    Keywords: Bootstrapping, consistency, information matrix test, Markov chain Monte Carlo simulation, multivariate kernel density, normality, serial correlation, test vector.
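The overall-p-value idea can be sketched as follows: estimate the joint null density of the statistic vector by a kernel method, then take the p-value as the share of null draws lying in regions of no higher density than the observed vector. The "bootstrapped" statistics below are simulated directly rather than obtained by resampling a fitted model, and all settings are illustrative.

```python
import numpy as np

def gauss_kde(points, x, h):
    """Product-Gaussian kernel density estimate, built from `points`,
    evaluated at the rows of `x`."""
    diff = (x[:, None, :] - points[None, :, :]) / h
    k = np.exp(-0.5 * np.sum(diff ** 2, axis=2))
    d = points.shape[1]
    return k.mean(axis=1) / (h * np.sqrt(2.0 * np.pi)) ** d

rng = np.random.default_rng(3)
# Stand-in for bootstrapped test statistics under the null: two
# correlated diagnostics (in practice these come from resampling).
null_stats = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=1000)
observed = np.array([[0.2, -0.1]])   # an unremarkable draw -> large p-value
h = 0.3

dens_obs = gauss_kde(null_stats, observed, h)[0]
dens_null = gauss_kde(null_stats, null_stats, h)
# Overall p-value: share of null draws with density no higher than observed.
p_value = np.mean(dens_null <= dens_obs)
```

Ranking points by their estimated joint density is what lets a single p-value summarise an arbitrary vector of dependent test statistics without choosing how to combine them by hand.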

    Bandwidth Selection for Multivariate Kernel Density Estimation Using MCMC

    We provide Markov chain Monte Carlo (MCMC) algorithms for computing the bandwidth matrix for multivariate kernel density estimation. Our approach treats the elements of the bandwidth matrix as parameters to be estimated, which we do by optimizing the likelihood cross-validation criterion. Numerical results show that the resulting bandwidths are superior to those from all existing methods; for dimensions greater than two, our algorithm is the first practical method for estimating the optimal bandwidth matrix. Moreover, the MCMC algorithm for bandwidth selection involves no increased difficulty as the dimension of the data increases.
    Keywords: Bandwidth selection, cross-validation, multivariate kernel density estimation, sampling algorithms.

    A Bayesian approach to bandwidth selection for multivariate kernel regression with an application to state-price density estimation

    Multivariate kernel regression is an important tool for investigating the relationship between a response and a set of explanatory variables. It is generally accepted that the performance of a kernel regression estimator depends largely on the choice of bandwidth rather than the kernel function. This nonparametric technique has been employed in a number of empirical studies, including the state-price density estimation pioneered by Aït-Sahalia and Lo (1998). However, the widespread usefulness of multivariate kernel regression has been limited by the difficulty of computing a data-driven bandwidth. In this paper, we present a Bayesian approach to bandwidth selection for multivariate kernel regression. A Markov chain Monte Carlo algorithm is presented to sample the bandwidth vector and other parameters of a multivariate kernel regression model. A Monte Carlo study shows that the proposed bandwidth selector is more accurate than the rule-of-thumb bandwidth selector known as the normal reference rule of Scott (1992) and Bowman and Azzalini (1997). The proposed bandwidth selection algorithm is applied to a multivariate kernel regression model that is often used to estimate the state-price density of Arrow-Debreu securities. When applying the proposed method to S&P 500 index options and DAX index options, we find that for short-maturity options the proposed Bayesian bandwidth selector produces a markedly different state-price density from the one produced by the subjective bandwidth selector discussed in Aït-Sahalia and Lo (1998).
    Keywords: Black-Scholes formula, likelihood, Markov chain Monte Carlo, posterior density.
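The estimator whose bandwidths are being selected here is standard multivariate Nadaraya-Watson regression; a minimal sketch with a fixed, illustrative bandwidth vector (the paper samples this vector by MCMC instead of fixing it):

```python
import numpy as np

def nw_regression(X, y, x0, h):
    """Nadaraya-Watson multivariate kernel regression estimate at x0,
    with a separate Gaussian-kernel bandwidth for each explanatory
    variable (the bandwidth vector h)."""
    w = np.exp(-0.5 * np.sum(((X - x0) / h) ** 2, axis=1))
    return np.sum(w * y) / np.sum(w)

# Toy data: a smooth bivariate regression function plus noise.
rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=500)

h = np.array([0.15, 0.15])              # illustrative fixed bandwidths
m_hat = nw_regression(X, y, np.array([0.3, 0.5]), h)
m_true = np.sin(np.pi * 0.3) + 0.5 ** 2
```

Because the fit is a local weighted average, an over-large bandwidth flattens genuine features of the regression surface, which is why a data-driven choice matters so much for state-price densities.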

    Local Linear Forecasts Using Cubic Smoothing Splines

    We show how cubic smoothing splines fitted to univariate time series data can be used to obtain local linear forecasts. Our approach is based on a stochastic state space model, which allows a likelihood approach to estimating the smoothing parameter and enables easy construction of prediction intervals. We show that our model is a special case of an ARIMA(0,2,2) model, and we provide a simple upper bound on the smoothing parameter that ensures an invertible model. We also show that the spline model is not a special case of Holt's local linear trend method. Finally, we compare the spline forecasts with Holt's forecasts and those obtained from the full ARIMA(0,2,2) model, showing that the restricted parameter space does not impair forecast performance.
    Keywords: ARIMA models, exponential smoothing, Holt's local linear forecasts, maximum likelihood estimation, nonparametric regression, smoothing splines, state space models, stochastic trends.
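Holt's local linear trend method, the benchmark against which the spline forecasts are compared, is easy to state: level and trend recursions followed by a linear extrapolation. A minimal sketch, with illustrative smoothing constants:

```python
import numpy as np

def holt_forecast(y, alpha, beta, steps):
    """Holt's local linear trend method: update a level and a trend
    with smoothing constants alpha and beta, then extrapolate the
    final straight line `steps` periods ahead."""
    level, trend = y[0], y[1] - y[0]          # simple initialisation
    for t in range(1, len(y)):
        prev = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + trend * np.arange(1, steps + 1)

y = np.arange(10, dtype=float)                # perfectly linear toy series
fc = holt_forecast(y, alpha=0.5, beta=0.3, steps=3)
```

On an exactly linear series the recursions track the line without error, so the three forecasts continue it as 10, 11, 12; the spline model delivers the same local-linear form of forecast but with a single smoothing parameter estimated by likelihood.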